On Approximate Learning by Multi-layered Feedforward Circuits
Authors
Abstract
We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions. First, we consider the objective of maximizing the ratio of correctly classified points to the training set size (e.g., see [3, 5]). We show that for single-hidden-layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function is used in the hidden layer and ε-separation of the output [19] is required, or if the semilinear activation function replaces the threshold function. Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within any constant larger than 1 for a multi-layered threshold circuit, provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
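To make the two objectives concrete, here is a minimal sketch (purely illustrative; the circuit setup and all names such as threshold_circuit, success_ratio, failure_ratio, and opt_errors are ours, not from the paper) that evaluates both objectives for a single-hidden-layer threshold circuit on a labeled training set:

```python
import numpy as np

def threshold_circuit(X, W, b, v, c):
    # Hidden unit i fires iff W[i] . x + b[i] >= 0; the single
    # output unit fires iff v . h + c >= 0.
    H = (X @ W.T + b >= 0).astype(float)   # hidden-layer activations
    return (H @ v + c >= 0).astype(int)    # 0/1 output labels

def success_ratio(X, y, W, b, v, c):
    # First objective: correctly classified points / training set size.
    return float(np.mean(threshold_circuit(X, W, b, v, c) == y))

def failure_ratio(X, y, W, b, v, c, opt_errors):
    # Second objective: this circuit's misclassifications divided by the
    # minimum achievable over the architecture (opt_errors; assumed
    # known here purely for illustration).
    errors = int(np.sum(threshold_circuit(X, W, b, v, c) != y))
    return errors / max(opt_errors, 1)
```

The hardness results above say that, already for such simple architectures, no polynomial-time learner can approximate the first ratio within relative error c/n, nor the failure ratio within any constant larger than 1 (with zero input biases), unless P = NP.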
Similar resources
On Properties of Networks of Neuron-Like Elements
The complexity and computational capacity of multi-layered feedforward neural networks are examined. Neural networks for special-purpose (structured) functions are considered from the perspective of circuit complexity. Known results in complexity theory are applied to the special instance of neural network circuits and, in particular, to classes of functions that can be implemented in shallow circuits...
Merging Echo State and Feedforward Neural Networks for Time Series Forecasting
Echo state neural networks, a special case of recurrent neural networks, are studied from the viewpoint of their learning ability, with the goal of improving their prediction ability. Standard training of these networks uses a pseudoinverse matrix for one-step learning of the weights from hidden to output neurons. Such learning was substituted by backpropagation-of-error learning...
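As a sketch of the closed-form step this blurb refers to (illustrative only; the dimensions and variable names below are assumed, not taken from the paper), the hidden-to-output readout of an echo state network can be fit in one shot with the Moore-Penrose pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: 200 time steps of 50 reservoir (hidden) states; in a
# real echo state network these rows come from driving the reservoir
# with the input time series.
H = rng.standard_normal((200, 50))   # collected hidden-state matrix
y = rng.standard_normal(200)         # one-step-ahead prediction targets

# One-step learning of the readout: the hidden-to-output weights
# minimizing ||H @ w - y|| via the Moore-Penrose pseudoinverse.
w_out = np.linalg.pinv(H) @ y

y_hat = H @ w_out                    # fitted one-step-ahead outputs
```

The variant described in the blurb replaces this closed-form solve with iterative backpropagation-of-error updates of the same hidden-to-output weights.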
Internal representations of multi-layered perceptrons
Feedforward neural networks make incomprehensible decisions resulting from mappings learned from training examples defined in high-dimensional feature spaces. What kinds of internal representations are developed by multi-layered perceptrons (MLPs), and how do they change during training? Scatterograms of the training data transformed to the hidden and output spaces reveal the dynamics of learning...
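A scatterogram of this kind is simple to produce; the sketch below is an assumed illustration (toy data, random weights standing in for training snapshots) of mapping training points into a sigmoid hidden space and plotting two hidden coordinates:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Assumed toy data and a 3-unit sigmoid hidden layer; in practice the
# weights would be snapshots of an MLP taken during training.
X = rng.standard_normal((300, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
W, b = rng.standard_normal((3, 2)), rng.standard_normal(3)

# Training data mapped into hidden space.
H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))

# Scatterogram: the hidden-space image projected onto two hidden units.
plt.scatter(H[:, 0], H[:, 1], c=y, s=10)
plt.xlabel("hidden unit 1 activation")
plt.ylabel("hidden unit 2 activation")
plt.title("training data in hidden space")
plt.show()
```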
Applicability of approximate multipliers in hardware neural networks
In recent years there has been growing interest in hardware neural networks, which offer many benefits over conventional software models, mainly in applications where speed, cost, reliability, or energy efficiency are of great importance. These hardware neural networks require many resource-, power-, and time-consuming multiplication operations, so special care must be taken during their design...
Fastest Learning in Small-World Neural Networks
We investigate supervised learning in neural networks. We consider a multi-layered feedforward network with backpropagation. We find that a network of small-world connectivity reduces the learning error and learning time compared to networks of regular or random connectivity. Our study has potential applications in the domains of data mining, image processing, speech recognition, and...
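The blurb does not spell out the wiring; as an assumed illustration, "small-world connectivity" usually refers to the Watts-Strogatz construction sketched below, which could serve as a sparsity mask on a layer's weights (all names and parameters here are hypothetical; the paper's exact wiring may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def watts_strogatz_mask(n, k, p):
    # Standard small-world construction: a ring lattice in which each
    # node connects to its k nearest neighbours, with every edge
    # rewired to a random target with probability p.
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            mask[i, (i + j) % n] = True
    for i in range(n):
        for j in np.flatnonzero(mask[i]):
            if rng.random() < p:
                mask[i, j] = False
                new = int(rng.integers(n))
                while new == i or mask[i, new]:
                    new = int(rng.integers(n))
                mask[i, new] = True
    return mask | mask.T

# E.g., constrain a 64-unit layer to small-world connectivity before
# backpropagation training (weights outside the mask stay zero).
W = rng.standard_normal((64, 64)) * watts_strogatz_mask(64, 4, 0.1)
```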
Journal: Theor. Comput. Sci.
Volume: 348
Issue: -
Pages: -
Publication date: 2000